We present a method to automatically compute correct gradients with respect to geometric scene parameters in neural SDF renderers. Recent physically based differentiable rendering techniques for meshes handle discontinuities, particularly at object silhouettes, by sampling them explicitly, but SDFs have no simple parametric form amenable to such sampling. Instead, our method builds on area-sampling techniques and develops a continuous warping function for SDFs that accounts for these discontinuities. Our method leverages the distance to the surface encoded in the SDF and computes this warping function using quadrature on sphere-traced points. We further show that subsampling these points makes the method tractable for neural SDFs. Our differentiable renderer can be used to optimize neural shapes from multi-view images, and produces 3D reconstructions comparable to those of recent SDF-based inverse rendering methods, without requiring 2D segmentation masks to guide the geometry optimization and without volumetric approximations of the geometry.
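As a point of reference for the sphere-traced points mentioned above, here is a minimal sketch of sphere tracing, the ray-marching primitive that exploits the SDF's distance-to-surface property; an analytic sphere stands in for a learned neural SDF, and all names are illustrative:

```python
import numpy as np

def sphere_sdf(p, center=np.zeros(3), radius=1.0):
    """Signed distance to a sphere; stands in for a learned neural SDF."""
    return np.linalg.norm(p - center) - radius

def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-4):
    """March along a (normalized) ray, stepping by the SDF value each time."""
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)
        if d < eps:       # close enough to the surface: report a hit
            return t, p
        t += d            # the SDF guarantees no surface within distance d
    return None, None     # ray missed the surface

t, hit = sphere_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]), sphere_sdf)
print(t, hit)  # t is about 2.0, hit near (0, 0, -1)
```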
Many real-world applications of language models (LMs), such as code autocomplete and writing assistance, involve human-LM interaction, but the main LM benchmarks are non-interactive: a system produces its output without human involvement. To evaluate human-LM interaction, we develop a framework, Human-AI Language-based Interaction Evaluation (H-LINE), that expands non-interactive evaluation along three dimensions, capturing (i) the interactive process, not only the final output; (ii) the first-person subjective experience, not just a third-party assessment; and (iii) notions of preference beyond quality. We then design five tasks, ranging from goal-oriented to open-ended, to capture different forms of interaction. On four state-of-the-art LMs (three variants of OpenAI's GPT-3 and AI21's J1-Jumbo), we find that better non-interactive performance does not always translate into better human-LM interaction, and that first-person and third-party metrics can diverge, suggesting the importance of examining the nuances of human-LM interaction.
Generating a chain of thought (CoT) can increase large language model (LLM) performance on a wide range of tasks. Zero-shot CoT evaluations, however, have been conducted primarily on logical tasks (e.g., arithmetic, commonsense QA). In this paper, we perform a controlled evaluation of zero-shot CoT across two sensitive domains: harmful questions and stereotype benchmarks. We find that using zero-shot CoT reasoning in a prompt can significantly increase a model's likelihood of producing undesirable output. Absent future advances in alignment or explicit mitigation instructions, zero-shot CoT should be avoided on tasks where models can make inferences about marginalized groups or harmful topics.
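For concreteness, zero-shot CoT prompting amounts to appending a reasoning trigger such as "Let's think step by step." to the prompt; the helper below sketches the two variants being compared (the function name is ours):

```python
def build_prompts(question: str):
    """Return a standard prompt and its zero-shot CoT counterpart."""
    standard = f"Q: {question}\nA:"
    # The standard zero-shot CoT trigger phrase from the literature:
    zero_shot_cot = f"Q: {question}\nA: Let's think step by step."
    return standard, zero_shot_cot

standard, cot = build_prompts("Is this statement a harmful stereotype?")
# The paper's finding: on sensitive topics, the CoT variant can increase
# the likelihood of undesirable output relative to the standard prompt.
```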
Prior work has identified a resilient phenomenon that threatens the performance of human-AI decision-making teams: overreliance, when people agree with an AI even when it is incorrect. Surprisingly, overreliance does not reduce when the AI produces explanations for its predictions, compared to only providing predictions. Some have argued that overreliance results from cognitive biases or uncalibrated trust, attributing overreliance to an inevitability of human cognition. By contrast, our paper argues that people strategically choose whether to engage with an AI explanation, and we demonstrate empirically that there are scenarios where AI explanations reduce overreliance. To achieve this, we formalize this strategic choice in a cost-benefit framework, where the costs and benefits of engaging with the task are weighed against the costs and benefits of relying on the AI. We manipulate the costs and benefits in a maze task, where participants collaborate with a simulated AI to find the exit of a maze. Through five studies (N = 731), we find that costs such as task difficulty (Study 1) and explanation difficulty (Studies 2 and 3), and benefits such as monetary compensation (Study 4), affect overreliance. Finally, Study 5 adapts the Cognitive Effort Discounting paradigm to quantify the utility of different explanations, providing further support for our framework. Our results suggest that some of the null effects found in the literature could be due in part to the explanation not sufficiently reducing the costs of verifying the AI's prediction.
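A minimal sketch of the cost-benefit comparison described above, with illustrative variable names rather than the paper's formalization:

```python
def engages_with_explanation(benefit_verify, cost_verify,
                             benefit_rely, cost_rely):
    """Return True if verifying the AI's answer has higher net utility
    than relying on it without engagement."""
    return (benefit_verify - cost_verify) > (benefit_rely - cost_rely)

# Raising task or explanation difficulty raises cost_verify, tipping the
# choice toward overreliance; monetary incentives raise benefit_verify.
print(engages_with_explanation(10, 8, 7, 1))  # False: verification too costly
print(engages_with_explanation(10, 2, 7, 1))  # True: cheap enough to verify
```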
Federated learning is a collaborative method that aims to preserve data privacy while training AI models. Current approaches to federated learning rely heavily on secure aggregation protocols to preserve data privacy. However, to some degree, such protocols assume that the entity orchestrating the federated learning process (i.e., the server) is not fully malicious or dishonest. We investigate vulnerabilities in secure aggregation that could arise if the server is fully malicious and attempts to gain access to private, potentially sensitive data. Furthermore, we provide a method to defend against such a malicious server, and demonstrate its effectiveness against known data-reconstruction attacks in the federated learning setting.
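For background, a common class of secure aggregation protocol uses pairwise masks that cancel in the sum, so the server only ever sees the aggregate. A minimal sketch of that idea follows; it is illustrative, not the paper's hardened protocol:

```python
import numpy as np

rng = np.random.default_rng(0)
updates = {c: rng.normal(size=4) for c in ["A", "B", "C"]}  # private client updates

def pair_seed(a, b):
    # both members of a pair derive the same seed (symmetric in a, b)
    return hash(frozenset((a, b))) % (2**32)

def masked_update(client, clients):
    masked = updates[client].copy()
    for other in clients:
        if other == client:
            continue
        mask = np.random.default_rng(pair_seed(client, other)).normal(size=4)
        # one member of each pair adds the shared mask, the other subtracts it
        masked += mask if client < other else -mask
    return masked

clients = sorted(updates)
aggregate = sum(masked_update(c, clients) for c in clients)
assert np.allclose(aggregate, sum(updates.values()))  # pairwise masks cancel
```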
We present SLATE, a sequence labeling approach for extracting tasks from free-form content such as digitally handwritten (or "inked") notes on a virtual whiteboard. Our approach allows us to create a single, low-latency model that simultaneously performs sentence segmentation and classification of the resulting sentences into task and non-task sentences. SLATE greatly outperforms a baseline two-model approach (sentence segmentation followed by a classification model), achieving a task F1 score of 84.4%, a sentence segmentation (boundary similarity) score of 88.4%, and three times lower latency than the baseline. Furthermore, we provide insights into the challenges of performing NLP in the inking domain. We release both our code and our dataset for this novel task.
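One way a single sequence-labeling pass can yield both boundaries and classes is a joint tag set in which B-* tags open a sentence and carry its class; decoding the label sequence then recovers segmentation and classification at once. The tag scheme below is illustrative, not necessarily SLATE's:

```python
tokens = ["buy", "milk", "today", "great", "meeting", "everyone"]
labels = ["B-TASK", "I-TASK", "I-TASK", "B-NOTASK", "I-NOTASK", "I-NOTASK"]

def decode(tokens, labels):
    """Recover (sentence, class) pairs from one joint label sequence."""
    sentences, current, current_cls = [], [], None
    for tok, lab in zip(tokens, labels):
        prefix, cls = lab.split("-")
        if prefix == "B":                 # a B- tag opens a new sentence
            if current:
                sentences.append((" ".join(current), current_cls))
            current, current_cls = [], cls
        current.append(tok)
    if current:
        sentences.append((" ".join(current), current_cls))
    return sentences

print(decode(tokens, labels))
# [('buy milk today', 'TASK'), ('great meeting everyone', 'NOTASK')]
```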
Modern multi-agent reinforcement learning frameworks rely on centralized training and reward shaping to perform well. However, centralized training and dense rewards are not readily available in the real world. Current multi-agent algorithms struggle to learn in the alternative setups of decentralized training or sparse rewards. To address these issues, we propose a self-supervised intrinsic reward, ELIGN (expectation alignment), inspired by the self-organization principle in zoology. Similar to how animals collaborate in a decentralized manner with those in their vicinity, agents trained with expectation alignment learn behaviors that match their neighbors' expectations. This allows the agents to learn collaborative behaviors without any external reward or centralized training. We demonstrate the efficacy of our approach across six tasks in the multi-agent particle environment and the more complex Google Research Football environment, comparing ELIGN to sparse and curiosity-based intrinsic rewards. As the number of agents increases, ELIGN scales well in all multi-agent tasks except one in which agents have different capabilities. We show that agent coordination improves through expectation alignment because agents learn to divide tasks amongst themselves, break coordination symmetries, and confuse adversaries. These results identify tasks where expectation alignment is a more useful strategy than curiosity-driven exploration for multi-agent coordination, enabling agents to achieve zero-shot coordination.
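A minimal sketch of an expectation-alignment intrinsic reward in this spirit, with illustrative names and shapes: each neighbor predicts the agent's next observation, and the agent is rewarded for behaving as its neighbors expect, i.e. for a small average prediction error:

```python
import numpy as np

def elign_reward(actual_next_obs, neighbor_predictions):
    """Negative mean L2 error between neighbors' predicted and actual obs."""
    errors = [np.linalg.norm(actual_next_obs - pred)
              for pred in neighbor_predictions]
    return -float(np.mean(errors))

actual = np.array([0.2, -0.1])
preds = [np.array([0.25, -0.1]), np.array([0.1, 0.0])]
print(elign_reward(actual, preds))  # closer predictions -> reward nearer 0
```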
Most methods for estimating the functional connectivity of the brain from functional magnetic resonance imaging (fMRI) data rely on computing some measure of statistical dependence, or more generally on univariate representative time series of regions of interest (ROIs) consisting of multiple voxels. However, summarizing an ROI's multiple time series with its mean or first principal component (1PC) can cause information loss; for example, the 1PC may explain only a small fraction of the variance of the multivariate signal of neuronal activity. We propose to compare ROIs directly, without using representative time series, and define a new multivariate connectivity measure, based on the Wasserstein distance, between ROIs that need not consist of the same number of voxels. We evaluate the proposed Wasserstein functional connectivity measure on an autism screening task, demonstrating its superiority over commonly used univariate and multivariate functional connectivity measures.
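A hedged sketch of such a measure using the POT optimal-transport library (`pip install pot`): each voxel's time series is treated as one sample point of its ROI's empirical distribution, so ROIs with different voxel counts compare naturally. This illustrates the idea rather than the paper's exact estimator:

```python
import numpy as np
import ot  # Python Optimal Transport

def wasserstein_connectivity(roi_a, roi_b):
    """Squared 2-Wasserstein distance between empirical voxel distributions;
    each ROI is a (voxels, timepoints) array."""
    n_a, n_b = roi_a.shape[0], roi_b.shape[0]
    weights_a = np.full(n_a, 1.0 / n_a)         # uniform weight per voxel
    weights_b = np.full(n_b, 1.0 / n_b)
    cost = ot.dist(roi_a, roi_b)                # pairwise squared Euclidean costs
    return ot.emd2(weights_a, weights_b, cost)  # exact optimal-transport cost

rng = np.random.default_rng(0)
roi_a = rng.normal(size=(30, 100))  # 30 voxels, 100 timepoints
roi_b = rng.normal(size=(45, 100))  # a differently sized ROI is fine
print(wasserstein_connectivity(roi_a, roi_b))
```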
Recent years have seen a surge in commercially available and affordable quadrupedal robots, with many of these platforms actively used in research and industry. As the availability of legged robots grows, so does the need for controllers that enable these robots to perform useful skills. However, most learning-based frameworks for controller development focus on training robot-specific controllers, a process that must be repeated for every new robot. In this work, we introduce a framework for training generalized locomotion (GenLoco) controllers for quadrupedal robots. Our framework synthesizes general-purpose locomotion controllers that can be deployed on a large variety of quadrupedal robots with similar morphologies. We present a simple but effective morphology-randomization method that procedurally generates a diverse set of simulated robots for training. We show that by training on this large set of simulated robots, our models acquire more general control policies that can be directly transferred to novel simulated and real-world robots with diverse morphologies not observed during training.
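A hedged sketch of procedural morphology randomization in this spirit; the fields, parameter ranges, and scaling heuristics below are ours, not the paper's:

```python
import random
from dataclasses import dataclass

@dataclass
class QuadrupedMorphology:
    body_mass: float      # kg
    body_length: float    # m
    leg_length: float     # m
    motor_torque: float   # N*m

def sample_morphology(rng: random.Random) -> QuadrupedMorphology:
    """Sample one simulated quadruped; values scale with a global size factor."""
    scale = rng.uniform(0.7, 1.5)  # illustrative small-to-large robot range
    return QuadrupedMorphology(
        body_mass=12.0 * scale**3 * rng.uniform(0.8, 1.2),  # mass ~ volume
        body_length=0.4 * scale,
        leg_length=0.3 * scale * rng.uniform(0.9, 1.1),
        motor_torque=33.5 * scale**2,
    )

rng = random.Random(42)
training_robots = [sample_morphology(rng) for _ in range(100)]
```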
Machine learning (ML) is now broadly accessible to the research community at large, which has facilitated a proliferation of novel and compelling applications of these emerging mathematical techniques across a wide range of disciplines. In this paper, we highlight one particular case study: the field of paleoanthropology, which seeks to understand the evolution of the human species on the basis of biological and cultural evidence. As we will show, the ease of use of ML algorithms, combined with a lack of expertise in their proper use within the anthropological research community, has led to fundamental misapplications that appear throughout the literature. The resulting unreliable results not only undermine efforts to legitimately incorporate ML into anthropological research, but also risk distorting our understanding of humanity's evolutionary and behavioral past. The aim of this paper is to provide a brief introduction to some of the ways in which ML has been applied in paleoanthropology; we also include a survey of basic ML algorithms for those not fully familiar with this still actively developing field. We discuss a series of mistakes, errors, and violations of correct ML methodological protocol that appear with troubling frequency across the accumulated body of anthropological literature. These errors include the use of outdated algorithms and practices; improper train/test splits, sample composition, and textual interpretation; and a lack of transparency due to the absence of data and code sharing and the consequent restrictions on independent replication. We assert that expanding samples, sharing data and code, re-evaluating approaches to peer review, and, most importantly, developing interdisciplinary teams that include ML experts are all necessary for the advancement of future research incorporating ML in anthropology.
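As one concrete example of the train/test-split issue: repeated measurements of the same specimen must not leak across the split. A group-wise split, sketched below with illustrative synthetic data, enforces this:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # e.g. 5 morphometric features
y = rng.integers(0, 2, size=200)              # e.g. a binary taxon label
specimen_ids = rng.integers(0, 40, size=200)  # several measurements per fossil

# Split by specimen (group), not by individual measurement
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=specimen_ids))

# No specimen appears on both sides of the split:
assert not set(specimen_ids[train_idx]) & set(specimen_ids[test_idx])
```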